Supplementary Material -- Towards Reliable Model Selection for Unsupervised Domain Adaptation: An Empirical Study and A Certified Baseline

Neural Information Processing Systems

We first prove the first inequality using Jensen's inequality, which states that for a real-valued, convex function f and a random variable X, f(E[X]) <= E[f(X)]. Next, we leverage properties of inequalities to prove the second inequality. Directly taking the source risk as the target risk is unreliable because of the distribution shift between the source and target domains, and this method has limited effectiveness in scenarios with severe domain shifts. Reverse Validation instead performs a reversed adaptation from the pseudo-labeled target domain back to the source domain and uses the source risk of this reversed adaptation task for validation. This work was completed while Dapeng (lhxxhb15@gmail.com)
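The Reverse Validation procedure described above can be sketched in code. This is a minimal illustration, not the paper's implementation: a nearest-centroid learner stands in for the actual UDA method, and all function names are ours.

```python
# Hedged sketch of Reverse Validation for UDA model selection.
# A nearest-centroid classifier is a stand-in for the UDA method under
# evaluation; in practice steps 1 and 3 would run the adaptation algorithm.
import numpy as np

def nearest_centroid_fit(X, y):
    # One centroid per class; returns (classes, centroids).
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(model, X):
    # Assign each point to the class of its nearest centroid.
    classes, centroids = model
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

def reverse_validation_risk(Xs, ys, Xt):
    # 1) Adapt source -> target (here: plain training on source labels).
    fwd = nearest_centroid_fit(Xs, ys)
    # 2) Pseudo-label the unlabeled target domain with the forward model.
    yt_pseudo = nearest_centroid_predict(fwd, Xt)
    # 3) Reversed adaptation: pseudo-labeled target -> source.
    rev = nearest_centroid_fit(Xt, yt_pseudo)
    # 4) The source risk of the reverse model is the validation score.
    ys_hat = nearest_centroid_predict(rev, Xs)
    return float((ys_hat != ys).mean())
```

A lower reverse-validation risk is taken as evidence that the forward adaptation produced reasonable target pseudo-labels, without ever touching target ground truth.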





Neural Information Processing Systems

Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels by transferring knowledge from a labeled source domain whose distribution differs from that of the target. However, UDA is not always successful, and several accounts of 'negative transfer' have been reported in the literature.


Towards Reliable Model Selection for Unsupervised Domain Adaptation: An Empirical Study and A Certified Baseline

Neural Information Processing Systems

Selecting appropriate hyperparameters is crucial for unlocking the full potential of advanced unsupervised domain adaptation (UDA) methods in unlabeled target domains. Although this challenge remains under-explored, it has recently garnered increasing attention with the proposal of various model selection methods. Reliable model selection should maintain performance across diverse UDA methods and scenarios, and in particular should avoid highly risky worst-case selections: picking the model or hyperparameter with the worst performance in the pool.
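The notion of a worst-case selection can be made concrete with a small sketch. The numbers and names below are purely illustrative assumptions: `val_scores` stands for an unsupervised selection criterion's scores (target labels are unavailable at selection time), and `target_accs` for the target accuracies we only learn afterwards.

```python
# Hypothetical illustration of worst-case model selection in UDA.
def selection_outcome(val_scores, target_accs):
    """Pick the model with the best validation score and report its
    target accuracy, plus whether the pick is the worst in the pool."""
    best = max(range(len(val_scores)), key=lambda i: val_scores[i])
    picked_acc = target_accs[best]
    return picked_acc, picked_acc == min(target_accs)

# A misleading criterion can rank the weakest model highest,
# producing exactly the worst-case selection the paper warns about.
acc, is_worst = selection_outcome([0.9, 0.2, 0.5], [0.41, 0.78, 0.66])
```

Here the criterion's top-scored model (validation score 0.9) is the one with the lowest target accuracy (0.41), so `is_worst` comes back `True`; a reliable selection method should keep such outcomes rare across methods and scenarios.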